User-driven counterfactual generator: a human-centered exploration
Beretta I., Cappuccio E., Marchiori Manerba M.
In this paper, we critically examine the limitations of the techno-solutionist approach to explanations in the context of counterfactual generation, reaffirming interactivity as a core value in the explanation interface between the model and the user.
Source: xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence, pp. 83–88, Lisbon, Portugal, 26-28/07/2023
Fairness auditing, explanation and debiasing in linguistic data and language models
Marchiori Manerba M.
This research proposal is framed in the interdisciplinary exploration of the socio-cultural implications
that AI exerts on individuals and groups. The focus concerns contexts where models can amplify
discrimination through algorithmic biases, e.g., in recommendation and ranking systems or abusive
language detection classifiers, and the debiasing of their automated decisions so that they become
beneficial and just for everyone. To address these issues, the main objective of the proposed research project is to
develop a framework to perform fairness auditing and debiasing of both classifiers and datasets, starting
with, but not limited to, abusive language detection, thus broadening the approach toward other NLP
tasks. Ultimately, by questioning the effectiveness of adjusting and debiasing existing resources, the
project aims to develop models that are truly inclusive, fair, and explainable by design.
Source: xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence, pp. 241–248, Lisbon, Portugal, 26-28/07/2023
Project(s): XAI